
    On Approximating Restricted Cycle Covers

    A cycle cover of a graph is a set of cycles such that every vertex is part of exactly one cycle. An L-cycle cover is a cycle cover in which the length of every cycle is in the set L. The weight of a cycle cover of an edge-weighted graph is the sum of the weights of its edges. We come close to settling the complexity and approximability of computing L-cycle covers. On the one hand, we show that for almost all L, computing L-cycle covers of maximum weight in directed and undirected graphs is APX-hard and NP-hard. Most of our hardness results hold even if the edge weights are restricted to zero and one. On the other hand, we show that the problem of computing L-cycle covers of maximum weight can be approximated within a factor of 2 for undirected graphs and within a factor of 8/3 in the case of directed graphs. This holds for arbitrary sets L. Comment: To appear in SIAM Journal on Computing. Minor changes.
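    A minimal Python sketch of the two definitions above (the representation and function names are our own and purely illustrative, not the paper's algorithm): a set of cycles is an L-cycle cover if it covers every vertex exactly once and every cycle length lies in L; its weight is the sum of its edge weights.

    # Illustrative sketch only: check the L-cycle-cover property and compute the
    # weight of a cover in a directed, edge-weighted graph.

    def is_L_cycle_cover(cycles, vertices, L):
        """cycles: list of cycles, each given as a list of vertices in order."""
        covered = [v for cycle in cycles for v in cycle]
        # Every vertex must lie on exactly one cycle ...
        if sorted(covered) != sorted(vertices):
            return False
        # ... and every cycle length must belong to the allowed set L.
        return all(len(cycle) in L for cycle in cycles)

    def cover_weight(cycles, weight):
        """weight: dict mapping directed edges (u, v) to their weight."""
        return sum(weight[(cycle[i], cycle[(i + 1) % len(cycle)])]
                   for cycle in cycles for i in range(len(cycle)))

    # Example: two 2-cycles covering {1, 2, 3, 4} form an L-cycle cover for L = {2}:
    # is_L_cycle_cover([[1, 2], [3, 4]], [1, 2, 3, 4], {2}) == True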

    Approximately Fair Cost Allocation in Metric Traveling Salesman Games

    A traveling salesman game is a cooperative game $\mathcal{G}=(N,c_D)$. Here N, the set of players, is the set of cities (or the vertices of the complete graph) and $c_D$ is the characteristic function, where D is the underlying cost matrix. For all $S\subseteq N$, define $c_D(S)$ to be the cost of a minimum-cost Hamiltonian tour through the vertices of $S\cup\{0\}$, where $0\notin N$ is called the home city. Define $\mathrm{Core}(\mathcal{G})=\{x\in\mathbb{R}^{|N|} : x(N)=c_D(N) \text{ and } \forall S\subseteq N,\ x(S)\le c_D(S)\}$ as the core of a traveling salesman game $\mathcal{G}$. Okamoto (Discrete Appl. Math. 138:349-369, 2004) conjectured that for the traveling salesman game $\mathcal{G}=(N,c_D)$ with D satisfying the triangle inequality, the problem of testing whether $\mathrm{Core}(\mathcal{G})$ is empty or not is NP-hard. We prove that this conjecture is true. This result directly implies NP-hardness for the general case when D is asymmetric. We also study approximately fair cost allocations for these games. For this, we introduce cycle cover games and show that the core of a cycle cover game is non-empty by finding a fair cost allocation vector in polynomial time. For a traveling salesman game, let $\epsilon\text{-Core}(\mathcal{G})=\{x\in\mathbb{R}^{|N|} : x(N)\ge c_D(N) \text{ and } \forall S\subseteq N,\ x(S)\le\epsilon\cdot c_D(S)\}$ be an $\epsilon$-approximate core, for a given $\epsilon>1$. By viewing an approximate fair cost allocation vector for this game as a sum of exact fair cost allocation vectors of several related cycle cover games, we provide a polynomial-time algorithm demonstrating the non-emptiness of the $\log_2(|N|-1)$-approximate core by exhibiting a vector in this approximate core for the asymmetric traveling salesman game. We improve this further by finding a $(\frac{4}{3}\log_3(|N|)+c)$-approximate core in polynomial time for some constant c. We also show that there exists an $\epsilon_0>1$ such that it is NP-hard to decide whether the $\epsilon_0$-Core$(\mathcal{G})$ is empty or not.
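    To make the definitions concrete, here is a brute-force Python sketch (our own illustration, not the paper's algorithm) that checks whether a cost allocation x lies in the ε-core by exhaustively enumerating tours and coalitions; it is exponential in |N| and only meant to spell out the constraints x(N) ≥ c_D(N) and x(S) ≤ ε·c_D(S).

    from itertools import combinations, permutations

    def tour_cost(S, c):
        """c_D(S): cost of a cheapest tour through the cities in S and the home city 0."""
        best = float("inf")
        for order in permutations(S):
            route = (0,) + order + (0,)
            cost = sum(c[route[i]][route[i + 1]] for i in range(len(route) - 1))
            best = min(best, cost)
        return best

    def in_eps_core(x, players, c, eps):
        """x: dict mapping each player to its allocated cost; c: cost matrix; eps >= 1."""
        if sum(x.values()) < tour_cost(tuple(players), c):        # require x(N) >= c_D(N)
            return False
        for k in range(1, len(players) + 1):
            for S in combinations(players, k):
                if sum(x[i] for i in S) > eps * tour_cost(S, c):  # require x(S) <= eps * c_D(S)
                    return False
        return True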

    A Deterministic PTAS for Commutative Rank of Matrix Spaces


    An adaptive prefix-assignment technique for symmetry reduction

    This paper presents a technique for symmetry reduction that adaptively assigns a prefix of variables in a system of constraints so that the generated prefix-assignments are pairwise nonisomorphic under the action of the symmetry group of the system. The technique is based on McKay's canonical extension framework [J. Algorithms 26 (1998), no. 2, 306-324]. Among the key features of the technique are (i) adaptability: the prefix sequence can be user-prescribed and truncated for compatibility with the group of symmetries; (ii) parallelizability: prefix-assignments can be processed in parallel independently of each other; (iii) versatility: the method is applicable whenever the group of symmetries can be concisely represented as the automorphism group of a vertex-colored graph; and (iv) implementability: the method can be implemented relying on a canonical labeling map for vertex-colored graphs as the only nontrivial subroutine. To demonstrate the practical applicability of our technique, we have prepared an experimental open-source implementation and carried out a set of experiments that demonstrate its ability to reduce symmetry on hard instances. Furthermore, we demonstrate that the implementation parallelizes effectively to compute clusters with multiple nodes via a message-passing interface. Comment: Updated manuscript submitted for review.

    A 7/9-Approximation Algorithm for the Maximum Traveling Salesman Problem

    We give a 7/9-approximation algorithm for the Maximum Traveling Salesman Problem. Comment: 6 figures.

    Exponential Time Complexity of Weighted Counting of Independent Sets

    We consider weighted counting of independent sets using a rational weight x: given a graph with n vertices, count its independent sets such that each set of size k contributes x^k. This is equivalent to computing the partition function of the lattice gas with hard-core self-repulsion and hard-core pair interaction. We show the following conditional lower bounds: if counting the satisfying assignments of a 3-CNF formula in n variables (#3SAT) needs time 2^{\Omega(n)} (i.e., there is a c>0 such that no algorithm can solve #3SAT in time 2^{cn}), then counting the independent sets of size n/3 of an n-vertex graph needs time 2^{\Omega(n)} and weighted counting of independent sets needs time 2^{\Omega(n/\log^3 n)} for all rational weights x \neq 0. We have two technical ingredients: the first is a reduction from 3SAT to independent sets that preserves the number of solutions and increases the instance size only by a constant factor. The second is a combination of vertex cloning and path addition. This graph transformation allows us to adapt a recent technique by Dell, Husfeldt, and Wahlen, which enables interpolation by a family of reductions, each of which increases the instance size only polylogarithmically. Comment: Introduction revised, differences between versions of counting independent sets stated more precisely, minor improvements. 14 pages.
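    As a concrete reference point, the quantity being counted is the partition function Z(x) = sum over independent sets S of x^{|S|}. The brute-force Python sketch below (our own illustration, exponential-time, not the paper's reductions) computes it directly from this definition.

    from itertools import combinations
    from fractions import Fraction

    def weighted_independent_set_count(n, edges, x):
        """Vertices 0..n-1, edges as pairs (u, v), rational weight x (Fraction or int)."""
        edge_set = {frozenset(e) for e in edges}
        total = Fraction(0)
        for k in range(n + 1):
            for S in combinations(range(n), k):
                # S is independent if no edge has both endpoints in S.
                independent = all(frozenset((u, v)) not in edge_set
                                  for u in S for v in S if u < v)
                if independent:
                    total += Fraction(x) ** k   # each independent set of size k contributes x^k
        return total

    # Example: the path on 3 vertices has independent sets {}, {0}, {1}, {2}, {0,2},
    # so Z(x) = 1 + 3x + x^2, and with x = 1/2:
    # weighted_independent_set_count(3, [(0, 1), (1, 2)], Fraction(1, 2)) == Fraction(11, 4)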

    On the orbit closure containment problem and slice rank of tensors

    We consider the orbit closure containment problem, which, for a given vector and a group orbit, asks if the vector is contained in the closure of the group orbit. Recently, many algorithmic problems related to orbit closures have proved to be quite useful in giving polynomial-time algorithms for special cases of the polynomial identity testing problem and several non-convex optimization problems. Answering a question posed by Wigderson, we show that the algorithmic problem corresponding to the orbit closure containment problem is NP-hard. We show this by establishing a computational equivalence between the solvability of homogeneous quadratic equations and a homogeneous version of the matrix completion problem, while showing that the latter is an instance of the orbit closure containment problem. Secondly, we consider the notion of slice rank of tensors, which was recently introduced by Tao and has subsequently been used for breakthroughs in several combinatorial problems such as cap sets, sunflower-free sets, tri-colored sum-free sets, and progression-free sets. We show that the corresponding algorithmic problem, which can also be phrased as a problem about a union of orbit closures, is also NP-hard, hence answering an open question by Bürgisser, Garg, Oliveira, Walter, and Wigderson. We show this by using a connection between the slice rank and the size of a minimum vertex cover of a hypergraph revealed by Tao and Sawin.

    On the Trace of the Real Author

    In pre-Revival Croatian literature there are works that so far have not been ascribed to any particular author. It is now clear that their real authors cannot be identified simply on the basis of general stylistic impression, as the late 19th-century scholar Armin Pavić believed. The approach of Kolendić, who at the start of this century introduced the method of hapaxes (words evidenced only in the corpus of one known author), seemed much more promising. Trying to prove Vetranović's authorship of a part of the mythological drama Orfeo, he pointed out several words which he claimed to be hapaxes of the said poet. Even though the tenability of his conclusions about the Orfeo can easily be dismissed simply by using the Historical Dictionary of the Croatian Language, the national literary historiography has accepted Kolendić's attribution. However, another attribution, based on the same method and proposed by the author of the present article, was rejected. Namely, after having found hapaxes of Zoranić's Planine in a pastoral eclogue by an unknown author, he attributed the eclogue to the same poet. The conclusion is self-evident: every new method should be thoroughly tested, but if no objection is found, it must thereafter be considered valid for all cases where it can be competently applied.

    Fast Evaluation of Interlace Polynomials on Graphs of Bounded Treewidth

    We consider the multivariate interlace polynomial introduced by Courcelle (2008), which generalizes several interlace polynomials defined by Arratia, Bollobás, and Sorkin (2004) and by Aigner and van der Holst (2004). We present an algorithm to evaluate the multivariate interlace polynomial of a graph with n vertices given a tree decomposition of the graph of width k. The best previously known result (Courcelle 2008) employs a general logical framework and leads to an algorithm with running time f(k)*n, where f(k) is doubly exponential in k. Analyzing the GF(2)-rank of adjacency matrices in the context of tree decompositions, we give a faster and more direct algorithm. Our algorithm uses 2^{3k^2+O(k)}*n arithmetic operations and can be efficiently implemented in parallel. Comment: v4: Minor error in Lemma 5.5 fixed, Section 6.6 added, minor improvements. 44 pages, 14 figures.
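    The GF(2)-rank computation mentioned above can be illustrated in a few lines of Python (our own sketch, not the paper's algorithm): rows of an adjacency matrix are packed into integers as bit masks and reduced by Gaussian elimination over GF(2).

    def gf2_rank(rows):
        """rows: list of ints; bit j of rows[i] is the (i, j) entry of a 0/1 matrix."""
        rank = 0
        for i in range(len(rows)):
            pivot = rows[i]
            if pivot == 0:
                continue
            rank += 1
            low = pivot & -pivot          # lowest set bit of the pivot row = pivot column
            for j in range(i + 1, len(rows)):
                if rows[j] & low:         # clear that column in all rows below (XOR = addition mod 2)
                    rows[j] ^= pivot
        return rank

    # Example: the adjacency matrix of the triangle K3 has GF(2)-rank 2:
    # gf2_rank([0b110, 0b101, 0b011]) == 2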

    Experimental discrimination of ion stopping models near the Bragg peak in highly ionized matter

    The energy deposition of ions in dense plasmas is a key process in inertial confinement fusion that determines the α-particle heating expected to trigger a burn wave in the hydrogen pellet and result in high thermonuclear gain. However, measurements of ion stopping in plasmas are scarce and mostly restricted to high ion velocities, where theory agrees with the data. Here, we report experimental data at low projectile velocities near the Bragg peak, where the stopping force reaches its maximum. This parameter range features the largest theoretical uncertainties, and conclusive data have been missing until now. The precision of our measurements, combined with a reliable knowledge of the plasma parameters, allows us to disprove several standard models for the stopping power at beam velocities typically encountered in inertial fusion. On the other hand, our data support theories that include a detailed treatment of strong ion-electron collisions.